On Outage Behavior of Wideband Slow-Fading Channels
This paper investigates point-to-point information transmission over a
wideband slow-fading channel, modeled as an (asymptotically) large number of
independent identically distributed parallel channels, with the random channel
fading realizations remaining constant over the entire coding block. On the one
hand, in the wideband limit the minimum achievable energy per nat required for
reliable transmission, as a random variable, converges in probability to a
certain deterministic quantity. On the other hand, the exponential decay rate
of the outage probability, termed the wideband outage exponent, characterizes
how the number of parallel channels, i.e., the "bandwidth", should
asymptotically scale in order to achieve a target outage
probability at a target energy per nat. We examine two scenarios: when the
transmitter has no channel state information and adopts uniform transmit power
allocation among parallel channels; and when the transmitter is endowed with a
one-bit channel state feedback for each parallel channel and accordingly
allocates its transmit power. For both scenarios, we evaluate the wideband
minimum energy per nat and the wideband outage exponent, and discuss their
implications for system performance.
Comment: Submitted to IEEE Transactions on Information Theory
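The convergence-in-probability claim can be illustrated with a small Monte Carlo sketch. Assuming Rayleigh fading (unit-mean exponential power gains, an assumption not stated above) and uniform power allocation, the random minimum energy per nat behaves like N0 divided by the empirical mean channel gain, which concentrates around a deterministic value as the number of parallel channels grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def min_energy_per_nat(n_channels, n_trials, N0=1.0):
    # Rayleigh fading: power gains |h|^2 ~ Exp(1), so E[|h|^2] = 1.
    g = rng.exponential(1.0, size=(n_trials, n_channels))
    # With uniform power allocation, the effective gain is the empirical
    # mean, so the random minimum energy per nat is N0 / (mean gain).
    return N0 / g.mean(axis=1)

# The spread of the random minimum energy per nat shrinks with bandwidth.
for n in (10, 100, 10000):
    e = min_energy_per_nat(n, 2000)
    print(n, e.std())
```

By the law of large numbers the empirical mean gain tends to 1, so the minimum energy per nat concentrates at N0, mirroring the deterministic wideband limit described above.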
Capacity Bounds for Relay Channels with Inter-symbol Interference and Colored Gaussian Noise
The capacity of a relay channel with inter-symbol interference (ISI) and
additive colored Gaussian noise is examined under an input power constraint.
Prior results are used to show that the capacity of this channel can be
computed by examining the circular degraded relay channel in the limit of
infinite block length. The current work provides single-letter expressions for
the achievable rates with decode-and-forward (DF) and compress-and-forward (CF)
processing employed at the relay. Additionally, the cut-set bound for the relay
channel is generalized for the ISI/colored Gaussian noise scenario. All results
hinge on showing the optimality of the decomposition of the relay channel with
ISI/colored Gaussian noise into an equivalent collection of coupled parallel,
scalar, memoryless relay channels. The regions of optimality of the DF and CF
achievable rates are discussed, along with optimal power allocation strategies
for the two lower bounds and the cut-set upper bound. As the
maximizing power allocations for DF and CF appear to be intractable, the
desired cost functions are modified and then optimized. The resulting rates are
illustrated through the computation of numerical examples.
Comment: 42 pages, 9 figures
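As a concrete reference point for the power-allocation discussion, classical water-filling over parallel scalar Gaussian subchannels (the textbook baseline arising from the decomposition above, not the paper's modified cost functions) can be sketched as follows:

```python
def waterfill(noise, P, tol=1e-9):
    """Water-filling over parallel Gaussian subchannels: maximize
    sum log(1 + p_i / n_i) subject to sum p_i = P, p_i >= 0.
    Bisection on the water level mu, with p_i = max(mu - n_i, 0)."""
    lo, hi = 0.0, max(noise) + P  # bracket the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if sum(max(mu - n, 0.0) for n in noise) > P:
            hi = mu  # too much power used: lower the water level
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(mu - n, 0.0) for n in noise]

# Three subchannels with noise levels 1, 2, 4 and total power 3:
# the water level settles at 3, so the noisiest channel gets nothing.
p = waterfill([1.0, 2.0, 4.0], P=3.0)
```

The bisection exploits that total allocated power is monotone in the water level, so the budget constraint pins down a unique level.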
Performance of PPM Multipath Synchronization in the Limit of Large Bandwidth
The acquisition, or synchronization, of the multipath profile for an
ultrawideband pulse position modulation (PPM) communication system is
considered. Synchronization is critical for the proper operation of PPM-based
systems. For the multipath channel, it is assumed that channel gains are known, but path
delays are unknown. In the limit of large bandwidth, W, it is assumed that the
number of paths, L, grows. The delay spread of the channel, M, is proportional
to the bandwidth. The rate of growth of L versus M determines whether
synchronization can occur. It is shown that if L/sqrt(M) --> 0, then the
maximum likelihood synchronizer cannot acquire any of the paths, while if
L/M --> 0, the maximum likelihood synchronizer is guaranteed to miss at least
one path.
Comment: 11 pages, submitted to the 2005 Allerton Conference on Communication, Control, and Computing
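A brute-force maximum likelihood acquisition step can be sketched as a matched-filter search over candidate delays. This is a toy discrete-time illustration with hypothetical parameters (pulse shape, gains, and delays are invented), not the paper's asymptotic analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

def ml_delay_search(received, pulse, delay_grid, L):
    # Brute-force matched-filter search: score each candidate delay by
    # its correlation with the pulse template, keep the L strongest lags.
    corr = np.correlate(received, pulse, mode="full")[len(pulse) - 1:]
    top = np.argsort(corr[delay_grid])[-L:]
    return set(int(delay_grid[i]) for i in top)

# Toy channel: L paths with known gains at unknown integer delays
# within a delay spread of M samples.
M, L = 200, 3
pulse = np.array([1.0, -1.0, 1.0])
true_delays = [20, 75, 140]
gains = [1.0, 0.8, 0.6]
x = np.zeros(M + len(pulse))
for d, g in zip(true_delays, gains):
    x[d:d + len(pulse)] += g * pulse
x += 0.05 * rng.standard_normal(x.size)

est = ml_delay_search(x, pulse, np.arange(M), L)
```

At low noise the matched-filter peaks sit at the true delays; the regimes discussed above concern when this search provably fails as L, M, and the bandwidth scale together.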
Unimodality-Constrained Matrix Factorization for Non-Parametric Source Localization
Herein, the problem of simultaneous localization of multiple sources given a
number of energy samples at different locations is examined. The proposed
strategies neither require knowledge of the signal propagation models nor
exploit the spatial signatures of the sources. A non-parametric source localization
framework based on a matrix observation model is developed. It is shown that
the source location can be estimated by localizing the peaks of a pair of
location signature vectors extracted from the incomplete energy observation
matrix. A robust peak localization algorithm is developed and shown to decrease
the source localization mean squared error (MSE) faster than O(1/M^1.5) with M
samples, when there is no measurement noise. To extract the source signature
vectors from a matrix with mixed energy from multiple sources, a
unimodality-constrained matrix factorization (UMF) problem is formulated, and
two rotation techniques are developed to solve the UMF efficiently. Our
numerical experiments demonstrate that the proposed scheme achieves
performance similar to that of the kernel regression baseline while using only
1/5 of the energy measurement samples in detecting a single source, and the
performance gain is more significant when detecting multiple sources.
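The idea of extracting location signature vectors and localizing their peaks can be illustrated for a single source, where the noiseless energy matrix is approximately rank-one: its dominant singular vectors act as unimodal signature vectors whose peaks give the source coordinates. The separable decay model below is an assumption of this sketch, not the paper's model:

```python
import numpy as np

n = 41
xs = np.arange(n)
src = (12, 30)  # hypothetical source grid location

# Separable energy-decay field (an assumed toy model): the energy
# matrix is the outer product of two unimodal signature vectors.
u = np.exp(-0.02 * (xs - src[0]) ** 2)
v = np.exp(-0.02 * (xs - src[1]) ** 2)
E = np.outer(u, v)

# The dominant singular vectors recover the signatures (up to sign);
# their peaks localize the source.
U, s, Vt = np.linalg.svd(E)
sig_row = np.abs(U[:, 0])   # signature over the x coordinate
sig_col = np.abs(Vt[0])     # signature over the y coordinate
est = (int(np.argmax(sig_row)), int(np.argmax(sig_col)))
```

With multiple sources the energy matrix mixes several such terms, which is what motivates the unimodality-constrained factorization described above.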
Cross-layer estimation and control for Cognitive Radio: Exploiting Sparse Network Dynamics
In this paper, a cross-layer framework to jointly optimize spectrum sensing
and scheduling in resource constrained agile wireless networks is presented. A
network of secondary users (SUs) accesses portions of the spectrum left unused
by a network of licensed primary users (PUs). A central controller (CC)
schedules the traffic of the SUs, based on distributed compressed measurements
collected by the SUs. Sensing and scheduling are jointly controlled to maximize
the SU throughput, with constraints on PU throughput degradation and SU cost.
The sparsity in the spectrum dynamics is exploited: leveraging a prior spectrum
occupancy estimate, the CC needs to estimate only a residual uncertainty vector
via sparse recovery techniques. The high complexity entailed by the POMDP
formulation is reduced by a low-dimensional belief representation via
minimization of the Kullback-Leibler divergence. It is proved that the
optimization of sensing and scheduling can be decoupled. A partially myopic
scheduling strategy is proposed, for which structural properties are proved
showing that the scheme allocates SU traffic to likely-idle spectral
bands. Simulation results show that this framework optimally balances
resources between spectrum sensing and data transmission, and that it defines
sensing-scheduling schemes that are most informative for network control,
yielding energy-efficient resource utilization.
Comment: Submitted to IEEE Transactions on Cognitive Communications and Networking (invited)
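The residual-uncertainty estimation step relies on standard sparse recovery. A generic orthogonal matching pursuit routine (a stand-in, since the abstract does not specify the recovery algorithm) illustrates recovering a sparse occupancy change from a small number of compressed measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy sparse recovery of x from
    y = A x when x has at most k nonzeros."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Re-fit on the selected support and update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Hypothetical setting: sparse occupancy change over 64 bands observed
# through 24 random compressed measurements.
n, m, k = 64, 24, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -0.7, 0.5]
y = A @ x_true
x_hat = omp(A, y, k)
```

The point mirrors the abstract: when only a sparse residual must be estimated, far fewer measurements than bands suffice.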
Optimal Sensing and Data Estimation in a Large Sensor Network
Energy-efficient use of large-scale sensor networks necessitates
activating a subset of possible sensors for estimation at a fusion center. The
problem is inherently combinatorial; to this end, a set of iterative,
randomized algorithms are developed for sensor subset selection by exploiting
the underlying statistics. Gibbs sampling-based methods are designed to
optimize the estimation error and the mean number of activated sensors. The
optimality of the proposed strategies is proven, along with guarantees on their
convergence speeds. In addition, a new algorithm exploiting stochastic
approximation in conjunction with Gibbs sampling is derived for a constrained
version of the sensor selection problem. The methodology is extended to the
scenario where the fusion center has access to only a parametric form of the
joint statistics, but not the true underlying distribution. Therein,
expectation-maximization is effectively employed to learn the distribution.
Strategies for iid time-varying data are also outlined. Numerical results show
that the proposed methods converge very fast to the respective optimal
solutions, and therefore can be employed for optimal sensor subset selection in
practical sensor networks.
Comment: 9 pages
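The flavor of Gibbs sampling-based sensor subset selection can be conveyed with a toy cost that trades estimation MSE against the number of active sensors. The cost model, sensor precisions, and annealing schedule below are assumptions of this sketch, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical toy model: activating sensor i contributes precision q[i]
# to the estimate, and each active sensor costs lam in energy.
q = np.array([5.0, 3.0, 1.0, 0.5, 0.2, 0.1, 0.1, 0.1])
lam = 0.05

def cost(b):
    # Estimation MSE (inverse total precision) plus activation cost.
    return 1.0 / (1.0 + q @ b) + lam * b.sum()

def gibbs_select(n_sweeps=300):
    """Annealed Gibbs sampler over the binary activation vector: each
    coordinate is resampled from its conditional at inverse temperature
    beta, which is slowly increased to concentrate on minimizers."""
    b = rng.integers(0, 2, q.size).astype(float)
    for beta in np.linspace(1.0, 5000.0, n_sweeps):
        for i in range(q.size):
            b[i] = 0.0; c0 = cost(b)
            b[i] = 1.0; c1 = cost(b)
            p1 = 1.0 / (1.0 + np.exp(np.clip(beta * (c1 - c0), -50, 50)))
            b[i] = 1.0 if rng.random() < p1 else 0.0
    return b

b_hat = gibbs_select()
```

This sidesteps the combinatorial search over all 2^n subsets while, at high beta, concentrating on low-cost activation patterns.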
Optimal Dynamic Sensor Subset Selection for Tracking a Time-Varying Stochastic Process
Motivated by the Internet-of-Things and sensor networks for cyber-physical
systems, the problem of dynamic sensor activation for the tracking of a
time-varying process is examined. The tradeoff is between energy efficiency,
which decreases with the number of active sensors, and fidelity, which
increases with the number of active sensors. The problem of minimizing the
time-averaged mean-squared error over infinite horizon is examined under the
constraint of the mean number of active sensors. The proposed methods artfully
combine three key ingredients: Gibbs sampling, stochastic approximation for
learning, and modifications to consensus algorithms, to create
high-performance, energy-efficient tracking mechanisms with active sensor
selection. The following progression of scenarios is considered: centralized
tracking of an i.i.d. process; distributed tracking of an i.i.d. process; and
finally, distributed tracking of a Markov chain. The challenge of the i.i.d. case is
that the process has a distribution parameterized by a known or unknown
parameter which must be learned. The key theoretical results prove that the
proposed algorithms converge to local optima for the two i.i.d. process cases;
numerical results suggest that global optimality is in fact achieved. The
proposed distributed tracking algorithm for a Markov chain, based on
Kalman-consensus filtering and stochastic approximation, is seen to offer an
error performance comparable to that of a competitive centralized Kalman
filter.
Comment: This is an intermediate version. This will be updated soon
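The basic tracking building block underlying these schemes, a Kalman filter for a Markov (AR(1)) process, can be sketched as follows. This is a single-sensor scalar illustration with invented parameters, not the distributed Kalman-consensus algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(4)

# Markov process x[t+1] = a x[t] + w, observed as y[t] = x[t] + v.
a, qv, rv, T = 0.95, 0.1, 1.0, 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(qv) * rng.standard_normal()
y = x + np.sqrt(rv) * rng.standard_normal(T)

xh, P = 0.0, 1.0  # state estimate and its error variance
est = np.zeros(T)
for t in range(T):
    # Predict through the Markov dynamics.
    xh, P = a * xh, a * a * P + qv
    # Update with the measurement y[t] via the Kalman gain.
    K = P / (P + rv)
    xh = xh + K * (y[t] - xh)
    P = (1 - K) * P
    est[t] = xh

mse_kf = np.mean((est - x) ** 2)
mse_raw = np.mean((y - x) ** 2)
```

The filter's tracking MSE is well below the raw measurement noise; the schemes above extend this block with consensus across sensors and learned activation decisions.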
Capacity of electron-based communication over bacterial cables: the full-CSI case
Motivated by recent discoveries of microbial communities that transfer
electrons across centimeter-length scales, this paper studies the information
capacity of bacterial cables via electron transfer, which coexists with
molecular communications, under the assumption of full causal channel state
information (CSI). The bacterial cable is modeled as an electron queue that
transfers electrons from the encoder at the electron donor source, which
controls the desired input electron intensity, to the decoder at the electron
acceptor sink. Clogging due to local ATP saturation along the cable is modeled.
A discrete-time scheme is investigated, enabling the computation of an
achievable rate. The regime of asymptotically small time-slot duration is
analyzed, and the optimality of binary input distributions is proved, i.e., the
encoder transmits at either maximum or minimum intensity, as dictated by the
physical constraints of the cable. A dynamic programming formulation of the
capacity is proposed, and the optimal binary signaling is determined via policy
iteration. It is proved that the optimal signaling has smaller intensity than
that given by the myopic policy, which greedily maximizes the instantaneous
information rate but neglects its effect on the steady-state cable
distribution. In contrast, the optimal scheme balances the tension between
achieving high instantaneous information rate, and inducing a favorable
steady-state distribution, such that those states characterized by high
information rates are visited more frequently, thus revealing the importance of
CSI. This work represents a first contribution towards the design of electron
signaling schemes in complex microbial structures, e.g., bacterial cables and
biofilms, where the tension between maximizing the transfer of information and
guaranteeing the well-being of the overall bacterial community arises.
Comment: Submitted to IEEE Journal on Selected Areas in Communications
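The policy-iteration machinery referenced above is standard for finite MDPs. The sketch below runs it on a hypothetical two-state "clogging" toy (states, rewards, and transition probabilities are invented for illustration) and shows the optimal policy improving on the myopic one, echoing the myopic-versus-optimal comparison made above:

```python
import numpy as np

def policy_iteration(P, R, g=0.95):
    """Generic finite-MDP policy iteration. P[a][s, s'] are transition
    probabilities, R[a][s] expected rewards, g the discount factor."""
    nA, nS = len(P), P[0].shape[0]
    pi = np.zeros(nS, dtype=int)
    while True:
        # Policy evaluation: solve (I - g P_pi) V = R_pi.
        Ppi = np.array([P[pi[s]][s] for s in range(nS)])
        Rpi = np.array([R[pi[s]][s] for s in range(nS)])
        V = np.linalg.solve(np.eye(nS) - g * Ppi, Rpi)
        # Policy improvement: act greedily with respect to V.
        Q = np.array([R[a] + g * P[a] @ V for a in range(nA)])
        new_pi = Q.argmax(axis=0)
        if np.array_equal(new_pi, pi):
            return pi, V
        pi = new_pi

# Two-state toy: action 1 ("max intensity") earns more now but pushes
# the chain toward a clogged state 1 with low reward.
P = [np.array([[0.9, 0.1], [0.5, 0.5]]),   # action 0: min intensity
     np.array([[0.4, 0.6], [0.1, 0.9]])]   # action 1: max intensity
R = [np.array([0.2, 0.0]), np.array([1.0, 0.1])]
pi, V = policy_iteration(P, R)
```

The optimal policy here trades some instantaneous reward to keep the chain in the favorable state, the same tension between instantaneous rate and steady-state distribution described above.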
Security against false data injection attack in cyber-physical systems
In this paper, secure, remote estimation of a linear Gaussian process via
observations at multiple sensors is considered. Such a framework is relevant to
many cyber-physical systems and internet-of-things applications. Sensors make
sequential measurements that are shared with a fusion center; the fusion center
applies a certain filtering algorithm to make its estimates. The challenge is
the presence of a few unknown malicious sensors which can inject anomalous
observations to skew the estimates at the fusion center. The set of malicious
sensors may be time-varying. The problems of malicious sensor detection and
secure estimation are considered. First, an algorithm for secure estimation is
proposed. The proposed estimation scheme uses a novel filtering and learning
algorithm, where an optimal filter is learnt over time by using the sensor
observations in order to filter out malicious sensor observations while
retaining other sensor measurements. Next, a novel detector to detect injection
attacks on an unknown sensor subset is developed. Numerical results demonstrate
up to 3 dB gain in the mean squared error and up to 75% higher attack detection
probability under a small false alarm rate constraint, against a competing
algorithm that requires additional side information.
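The benefit of filtering out malicious observations can be illustrated with a simple robust-fusion toy: a median combiner (a stand-in for the learned filter described above, with invented parameters) resists an injected bias that badly skews the naive mean:

```python
import numpy as np

rng = np.random.default_rng(5)

# Ten sensors report a common scalar; two malicious sensors inject a
# large constant bias into their observations.
n_sensors, n_mal, T = 10, 2, 500
truth = rng.standard_normal(T)
obs = truth[None, :] + 0.3 * rng.standard_normal((n_sensors, T))
obs[:n_mal] += 5.0  # false data injection on the malicious subset

est_mean = obs.mean(axis=0)        # naive fusion: skewed by the attack
est_median = np.median(obs, axis=0)  # robust fusion: ignores outliers

mse_mean = np.mean((est_mean - truth) ** 2)
mse_median = np.mean((est_median - truth) ** 2)
```

With 2 of 10 sensors biased by 5, the mean inherits a bias of about 1.0, while the median stays close to the honest majority; the learned filter above pursues the same goal adaptively, for a time-varying malicious set.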
Identifiability Scaling Laws in Bilinear Inverse Problems
A number of ill-posed inverse problems in signal processing, like blind
deconvolution, matrix factorization, dictionary learning and blind source
separation share the common characteristic of being bilinear inverse problems
(BIPs), i.e. the observation model is a function of two variables and
conditioned on one variable being known, the observation is a linear function
of the other variable. A key issue that arises for such inverse problems is
that of identifiability, i.e. whether the observation is sufficient to
unambiguously determine the pair of inputs that generated the observation.
Identifiability is a key concern for applications like blind equalization in
wireless communications and data mining in machine learning. Herein, a unifying
and flexible approach to identifiability analysis for general conic prior
constrained BIPs is presented, exploiting a connection to low-rank matrix
recovery via lifting. We develop deterministic identifiability conditions on
the input signals and examine their satisfiability in practice for three
classes of signal distributions, viz. dependent but uncorrelated, independent
Gaussian, and independent Bernoulli. In each case, scaling laws are developed
that trade off the probability of robust identifiability against the
complexity of the rank-two null space. An added appeal of our approach is that the rank-two null
space can be partly or fully characterized for many bilinear problems of
interest (e.g. blind deconvolution). We present numerical experiments involving
variations on the blind deconvolution problem that exploit a characterization
of the rank-two null space and demonstrate that the scaling laws offer good
estimates of identifiability.
Comment: 25 pages, 5 figures
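The lifting connection can be made concrete for blind deconvolution: circular convolution is bilinear in (w, x) and therefore linear in the rank-one lifted matrix w x^T, since y[m] = sum_k w[k] x[(m-k) mod n] reads off anti-diagonals of the outer product. A small numerical check:

```python
import numpy as np

rng = np.random.default_rng(6)

n = 8
w, x = rng.standard_normal(n), rng.standard_normal(n)

# Circular convolution y = w (*) x, computed via the FFT.
y_conv = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))

# The same y as a LINEAR function of the lifted rank-one matrix
# W = w x^T: each output sums one wrapped anti-diagonal of W.
W = np.outer(w, x)
y_lift = np.array([sum(W[k, (m - k) % n] for k in range(n))
                   for m in range(n)])
```

Identifiability of (w, x) from y thus reduces to whether the rank-one matrix W can be recovered from these linear measurements, which is the low-rank recovery viewpoint used above.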